
[Bugfix] update should_ignore_layer #11354

Merged
1 commit merged into vllm-project:main on Dec 21, 2024

Conversation

horheynm
Contributor

@horheynm horheynm commented Dec 20, 2024

FIX: ignore logic in Compressed Tensors utils.

Previously, for fused layers, should_ignore_layer went straight to the shard_proj_names logic without considering the ignore list provided by the input argument.

The fix also consults the ignore list for fused layers.

Tested with this previously failing model:

vllm serve horheynm/Phi-3-mini-4k-instruct-kv_cache
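
For context, here is a minimal sketch of the fixed behavior. The names (FUSED_LAYER_NAME_MAPPING, the "re:" regex convention for ignore entries) mirror vLLM's compressed-tensors utilities, but the exact condition and signatures below are an assumption for illustration, not the literal diff:

# Minimal sketch of the fixed ignore check (assumed names, not the literal diff).
import re
from typing import Iterable, Optional

# Fused module names mapped to their shard projections (abbreviated).
FUSED_LAYER_NAME_MAPPING = {
    "qkv_proj": ["q_proj", "k_proj", "v_proj"],
    "gate_up_proj": ["gate_proj", "up_proj"],
}


def _matches_ignore(name: str, ignore: Iterable[str]) -> bool:
    # An ignore entry is either an exact layer name or an "re:"-prefixed regex.
    for pattern in ignore:
        if pattern.startswith("re:"):
            if re.fullmatch(pattern[3:], name):
                return True
        elif pattern == name:
            return True
    return False


def should_ignore_layer(layer_name: Optional[str], ignore: Iterable[str]) -> bool:
    if layer_name is None:
        return False
    proj_name = layer_name.split(".")[-1]

    # The fix: only fall into the per-shard (shard_proj_names) logic when the
    # fused layer itself is not already listed in `ignore`.
    if proj_name in FUSED_LAYER_NAME_MAPPING and layer_name not in ignore:
        shard_proj_names = FUSED_LAYER_NAME_MAPPING[proj_name]
        shard_names = [
            layer_name.replace(proj_name, shard) for shard in shard_proj_names
        ]
        should_ignore = _matches_ignore(shard_names[0], ignore)
        # If shard_idx=1+, confirm the scheme matches prior shards.
        for shard_name in shard_names[1:]:
            if _matches_ignore(shard_name, ignore) != should_ignore:
                raise ValueError(f"Found a different quantization schemes for "
                                 f"{shard_proj_names} in {layer_name}. vLLM "
                                 "requires all to use the same scheme.")
        return should_ignore

    return _matches_ignore(layer_name, ignore)

Checkpoints such as Phi-3 already serialize qkv_proj and gate_up_proj as fused modules, so an ignore entry can name the fused module directly; the extra condition sketched above lets such an entry take effect instead of always being rewritten into per-shard names.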


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs will not trigger a full CI run by default. Instead, only the fastcheck CI will run, covering a small and essential subset of CI tests to quickly catch errors. You can run the other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@DarkLight1337 DarkLight1337 requested a review from mgoin December 20, 2024 04:11
Signed-off-by: George Ohashi <[email protected]>
@mgoin
Member

mgoin commented Dec 20, 2024

Could you share an example of a config where it fails before this change?

I don't understand why we want to short-circuit the logic in this function even if some layers are ignored, because what if some shards are ignored and others aren't? Basically, I want to make sure we still hit this error message:

            # If shard_idx=1+ confirm scheme matches prior shards.
            elif should_ignore_shard != should_ignore_layer:
                raise ValueError(f"Found a different quantization schemes for "
                                 f"{shard_proj_names} in {layer_name}. vLLM "
                                 "requires all to use the same scheme.")

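For concreteness, here is a hypothetical input that mixes shards and should therefore still reach that error; the layer names are made up for illustration:

# Hypothetical case: q_proj is ignored but k_proj/v_proj are not, so the
# shard-consistency check should still raise for the fused qkv_proj layer.
ignore = ["model.layers.0.self_attn.q_proj"]
layer_name = "model.layers.0.self_attn.qkv_proj"
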
@horheynm
Contributor Author

Could you share an example of a config where it fails before this change?

I don't understand why we want to short-circuit the logic in this function even if some layers are ignored, because what if some shards are ignored and others aren't? Basically, I want to make sure we still hit this error message.

Sure, here is an example using llmcompressor:

from datasets import load_dataset
from loguru import logger
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor.transformers import oneshot

# Select model and load it.
MODEL_ID = "microsoft/Phi-3-mini-4k-instruct"
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Select calibration dataset.
DATASET_ID = "HuggingFaceH4/ultrachat_200k"
DATASET_SPLIT = "train_sft"

# Select number of samples. 512 samples is a good place to start.
# Increasing the number of samples can improve accuracy.
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048

# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))


def process_and_tokenize(example):
    text = tokenizer.apply_chat_template(example["messages"], tokenize=False)
    return tokenizer(
        text,
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    )


ds = ds.map(process_and_tokenize, remove_columns=ds.column_names)

recipe = """
quant_stage:
  quant_modifiers:
    QuantizationModifier:
      ignore: ["lm_head"]
      kv_cache_scheme:
        {num_bits: 8, type: float, symmetric: true, strategy: tensor}
"""

# Apply algorithms.
oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

# logger.info(
#     "Running sample generation. ",
#     "Note: Inference with the quantized kv_cache is not supported. ",
#     "Please use vLLM for inference with the quantized kv_cache.",
# )
# # Confirm generations of the quantized model look sane.
# print("\n\n")
# print("========== SAMPLE GENERATION ==============")
# input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids.to("cuda")
# output = model.generate(input_ids, max_new_tokens=100)
# print(tokenizer.decode(output[0]))
# print("==========================================\n\n")

# Save to disk compressed.
SAVE_DIR = MODEL_ID.split("/")[1] + "-FP8-KV-only"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)


from vllm import LLM, SamplingParams
sampling_params = SamplingParams(temperature=0.80, top_p=0.95)

llm = LLM(model=SAVE_DIR)
outputs = llm.generate("Hello my name is", sampling_params)
print(outputs)

@horheynm
Contributor Author

Could you share an example of a config where it fails before this change?

I don't understand why we want to short-circuit the logic in this function even if some layers are ignored, because what if some shards are ignored and others aren't? Basically, I want to make sure we still hit this error message:

            # If shard_idx=1+ confirm scheme matches prior shards.
            elif should_ignore_shard != should_ignore_layer:
                raise ValueError(f"Found a different quantization schemes for "
                                 f"{shard_proj_names} in {layer_name}. vLLM "
                                 "requires all to use the same scheme.")

Simple reproduction script:

from vllm import LLM, SamplingParams

sampling_params = SamplingParams(temperature=0.80, top_p=0.95)

# https://huggingface.co/horheynm/Phi-3-mini-4k-instruct-kv_cache/blob/main/config.json
path = "horheynm/Phi-3-mini-4k-instruct-kv_cache"

llm = LLM(model=path)
outputs = llm.generate("Hello my name is", sampling_params)
print(outputs[0].outputs[0].text)

@mgoin
Member

mgoin commented Dec 20, 2024

I see, so this is to work around an issue unrelated to the quantization of fused layers. LGTM then, thanks!

@mgoin mgoin added the ready ONLY add when PR is ready to merge/full CI is needed label Dec 20, 2024
@mgoin mgoin enabled auto-merge (squash) December 20, 2024 15:22
@mgoin mgoin merged commit 51ff216 into vllm-project:main Dec 21, 2024
68 checks passed
BKitor pushed a commit to BKitor/vllm that referenced this pull request Dec 30, 2024